ai-generated question
Lightweight Prompt Engineering for Cognitive Alignment in Educational AI: A OneClickQuiz Case Study
Yaacoub, Antoun, Assaghir, Zainab, Da-Rugna, Jérôme
The rapid integration of Artificial Intelligence (AI) into educational technology promises to revolutionize content creation and assessment. However, the quality and pedagogical alignment of AI-generated content remain critical challenges. This paper investigates the impact of lightweight prompt engineering strategies on the cognitive alignment of AI-generated questions within OneClickQuiz, a Moodle plugin leveraging generative AI. We evaluate three prompt variants (a detailed baseline, a simpler version, and a persona-based approach) across the Knowledge, Application, and Analysis levels of Bloom's Taxonomy. Using an automated classification model from prior work together with human review, we find that explicit, detailed prompts are crucial for precise cognitive alignment. While simpler and persona-based prompts yield clear and relevant questions, they frequently misalign with the intended Bloom's levels, generating outputs that are either too complex or deviate from the desired cognitive objective. This study underscores the importance of strategic prompt engineering in fostering pedagogically sound AI-driven educational solutions and offers guidance on optimizing AI for quality content generation in learning analytics and smart learning environments.
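The abstract does not reproduce the prompts themselves, so the sketch below is only a hypothetical illustration of how a detailed baseline, a simplified variant, and a persona-based variant might differ for Bloom's-aligned question generation; the template wording, the `build_prompt` helper, and the variant names are assumptions, not OneClickQuiz's actual prompts.

```python
# Hypothetical prompt templates illustrating the three strategies compared
# in the study; the real OneClickQuiz prompts are not given in the abstract.

DETAILED_BASELINE = (
    "Generate one multiple-choice question on the topic: {topic}.\n"
    "Target Bloom's Taxonomy level: {level}.\n"
    "Constraints:\n"
    "- The stem must demand only the target level's cognitive process\n"
    "  (recall for Knowledge, applying a procedure for Application,\n"
    "  comparing or decomposing for Analysis).\n"
    "- Provide four options, exactly one correct, plus a one-sentence rationale."
)

SIMPLE_VARIANT = (
    "Write a multiple-choice question about {topic} at the {level} level of "
    "Bloom's Taxonomy, with four options and the correct answer marked."
)

PERSONA_VARIANT = (
    "You are an experienced university instructor preparing an exam. Create "
    "one multiple-choice question on {topic} suited to the {level} level of "
    "Bloom's Taxonomy, with four options and the correct answer marked."
)

def build_prompt(variant: str, topic: str, level: str) -> str:
    """Fill the chosen template with the quiz topic and Bloom's level."""
    templates = {
        "detailed": DETAILED_BASELINE,
        "simple": SIMPLE_VARIANT,
        "persona": PERSONA_VARIANT,
    }
    return templates[variant].format(topic=topic, level=level)

if __name__ == "__main__":
    print(build_prompt("detailed", "binary search trees", "Application"))
```

The point of contrast is that only the detailed variant spells out what each cognitive level requires, which is consistent with the paper's finding that explicit constraints drive alignment.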
Assessing the Quality of AI-Generated Exams: A Large-Scale Field Study
Isley, Calvin, Gilbert, Joshua, Kassos, Evangelos, Kocher, Michaela, Nie, Allen, Brunskill, Emma, Domingue, Ben, Hofman, Jake, Legewie, Joscha, Svoronos, Teddy, Tuminelli, Charlotte, Goel, Sharad
While large language models (LLMs) challenge conventional methods of teaching and learning, they present an exciting opportunity to improve efficiency and scale high-quality instruction. One promising application is the generation of customized exams, tailored to specific course content. There has been significant recent excitement about automatically generating questions with artificial intelligence, but comparatively little work evaluating the psychometric quality of these items in real-world educational settings. Filling this gap is an important step toward understanding generative AI's role in effective test design. In this study, we introduce and evaluate an iterative refinement strategy for question generation, repeatedly producing, assessing, and improving questions through cycles of LLM-generated critique and revision. We evaluate the quality of these AI-generated questions in a large-scale field study involving 91 classes (covering computer science, mathematics, chemistry, and more) in dozens of colleges across the United States, comprising nearly 1700 students. Our analysis, based on item response theory (IRT), suggests that, for students in our sample, the AI-generated questions performed comparably to expert-created questions designed for standardized exams. Our results illustrate the power of AI to make high-quality assessments more readily available, benefiting both teachers and students.
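The refinement strategy is described only at a high level, so the following is a minimal sketch of one way such a generate-critique-revise cycle could be structured; the `llm` callable is a stand-in for any text-generation client and is an assumption, not the interface used in the study.

```python
# Sketch of an iterative generate-critique-revise loop for exam questions.
# `llm` is a placeholder for any prompt-in, text-out generation call.

from typing import Callable

def refine_question(llm: Callable[[str], str],
                    course_content: str,
                    n_rounds: int = 3) -> str:
    """Draft a question, then alternate LLM critique and revision."""
    question = llm(
        "Write one multiple-choice exam question, with four options and an "
        f"answer key, covering the following course content:\n{course_content}"
    )
    for _ in range(n_rounds):
        critique = llm(
            "Critique this exam question for clarity, appropriate difficulty, "
            "a single defensible correct answer, and plausible distractors:\n"
            f"{question}"
        )
        question = llm(
            "Revise the question to address every point in the critique, and "
            "return only the revised question.\n\n"
            f"Question:\n{question}\n\nCritique:\n{critique}"
        )
    return question
```

Each round feeds the model's own critique back into a revision step; it is the output of this loop that the paper then evaluates psychometrically, via IRT, against expert-written items.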
The Potential of LLMs in Medical Education: Generating Questions and Answers for Qualification Exams
Zhu, Yunqi, Tang, Wen, Sun, Ying, Yang, Xuebing
Recent research on large language models (LLMs) has primarily focused on their adaptation and application in specialized domains. The application of LLMs in the medical field has mainly concentrated on tasks such as the automation of medical report generation, summarization, diagnostic reasoning, and question-and-answer interactions between doctors and patients. The challenge of becoming a good teacher is more formidable than that of becoming a good student, and this study pioneers the application of LLMs in the field of medical education. In this work, we investigate the extent to which LLMs can generate medical qualification exam questions and corresponding answers based on few-shot prompts. Using a real-world Chinese dataset of elderly chronic diseases, we tasked eight widely used LLMs (ERNIE 4, ChatGLM 4, Doubao, Hunyuan, Spark 4, Qwen, Llama 3, and Mistral) with generating open-ended questions and answers from a subset of sampled admission reports. We then engaged medical experts to manually evaluate these open-ended questions and answers across multiple dimensions. The study found that, with few-shot prompts, LLMs can effectively mimic real-world medical qualification exam questions, although there is room for improvement in the correctness, evidence-based statements, and professionalism of the generated answers. LLMs also demonstrate a decent ability to correct and rectify reference answers. Given the immense potential of artificial intelligence in the medical field, generating questions and answers for medical qualification exams aimed at medical students, interns, and residents can be a significant focus of future research.
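The abstract mentions few-shot prompting over sampled admission reports but not the prompt format; the sketch below shows one plausible way such a prompt could be assembled. The example report, question, and answer are invented for illustration and are not drawn from the study's dataset.

```python
# Hypothetical few-shot prompt assembly for turning admission reports into
# open-ended exam questions and model answers. The worked example below is
# invented; the study's actual reports and prompts are not in the abstract.

FEW_SHOT_EXAMPLES = [
    {
        "report": "Elderly patient with type 2 diabetes admitted for poor "
                  "glycaemic control despite oral therapy.",
        "question": "What factors should be reviewed before intensifying this "
                    "patient's treatment, and why?",
        "answer": "Adherence, diet, comorbidities, and renal function should "
                  "be assessed, since they determine which guideline-"
                  "recommended adjustments are appropriate.",
    },
]

def build_few_shot_prompt(new_report: str) -> str:
    """Concatenate worked examples, then ask for a Q&A on the new report."""
    parts = ["Generate an open-ended qualification-exam question and a model "
             "answer from each admission report.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Report: {ex['report']}\n"
                     f"Question: {ex['question']}\n"
                     f"Answer: {ex['answer']}\n")
    parts.append(f"Report: {new_report}\nQuestion:")
    return "\n".join(parts)
```

Whatever the model returns for the final, incomplete example is what the paper's expert reviewers would then score on dimensions such as correctness, evidence-based statements, and professionalism.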
Using AI-generated questions to train NLP systems
We describe a recent approach to the popular extractive question answering (extractive QA) task, one that generates its own training data instead of requiring existing annotated question answering examples. Extractive QA is a popular task for natural language processing (NLP) research, where models must extract a short snippet from a document in order to answer a natural language question. Though supervised models perform well at extractive QA, they require thousands, and sometimes hundreds of thousands, of annotated examples for training, and their performance suffers when tested outside the textual domains and language they were trained on. By approaching extractive QA as a self-supervised task, our technique outperformed early supervised models on the widely used SQuAD data set while requiring no annotated question answering training data. The code for our method is now available to download.
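The post summarizes the method without code; as a rough illustration of the general self-supervised recipe (choose an answer span from a passage, form a question from its sentence, and train on the synthetic pairs), the toy sketch below uses a crude cloze heuristic. It is an assumption for illustration only and is far simpler than the actual unsupervised question-generation approach.

```python
# Toy illustration of self-supervised QA data generation: pick spans from a
# passage as "answers" and turn their sentences into cloze-style "questions",
# yielding (question, passage, answer) triples a QA model could train on.
# This crude heuristic only gestures at the real method.

import re
from typing import Iterator, Tuple

def cloze_examples(passage: str) -> Iterator[Tuple[str, str, str]]:
    """Yield (cloze_question, passage, answer) triples from one passage."""
    for sentence in re.split(r"(?<=[.!?])\s+", passage):
        # Capitalized multi-word spans serve as a rough proxy for entities.
        for match in re.finditer(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)+\b", sentence):
            answer = match.group(0)
            question = sentence.replace(answer, "[MASK]", 1)
            yield question, passage, answer

if __name__ == "__main__":
    text = ("Marie Curie was born in Warsaw. She shared the Nobel Prize with "
            "Pierre Curie for research on radioactivity.")
    for question, _, answer in cloze_examples(text):
        print(f"Q: {question}\nA: {answer}\n")
```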